15 research outputs found
Scalable Certified Segmentation via Randomized Smoothing
We present a new certification method for image and point cloud segmentation
based on randomized smoothing. The method leverages a novel scalable algorithm
for prediction and certification that correctly accounts for multiple testing,
necessary for ensuring statistical guarantees. The key to our approach is
reliance on established multiple-testing correction mechanisms as well as the
ability to abstain from classifying single pixels or points while still
robustly segmenting the overall input. Our experimental evaluation on synthetic
data and challenging datasets, such as Pascal Context, Cityscapes, and
ShapeNet, shows that our algorithm can achieve, for the first time, competitive
accuracy and certification guarantees on real-world segmentation tasks. We
provide an implementation at https://github.com/eth-sri/segmentation-smoothing.
Comment: ICML 2021
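To make the mechanism concrete, below is a minimal sketch (not the paper's implementation) of per-pixel randomized smoothing with a plain Bonferroni correction and abstention. The `model` callable, the `NUM_CLASSES` and `ABSTAIN` constants, and the noise/sample parameters are all illustrative assumptions; the paper's actual algorithm and multiple-testing procedure may differ.

```python
import numpy as np
from scipy.stats import binomtest

NUM_CLASSES = 21   # assumed size of the label set
ABSTAIN = -1       # sentinel for pixels we refuse to certify

def certify_segmentation(model, x, sigma=0.25, n=1000, alpha=0.001):
    """Per-pixel majority vote under Gaussian input noise.

    `model(noisy)` is assumed to return an (H, W) array of class ids.
    A pixel keeps its majority class only if a one-sided binomial test,
    Bonferroni-corrected over all H*W pixels, rejects p <= 0.5;
    otherwise the pixel abstains.
    """
    counts = None
    for _ in range(n):
        noisy = x + sigma * np.random.randn(*x.shape)
        pred = model(noisy)                        # (H, W) class ids
        onehot = np.eye(NUM_CLASSES, dtype=np.int64)[pred]
        counts = onehot if counts is None else counts + onehot

    top = counts.argmax(axis=-1)                   # majority class per pixel
    top_count = counts.max(axis=-1)
    H, W = top.shape
    out = np.full((H, W), ABSTAIN, dtype=np.int64)
    alpha_pixel = alpha / (H * W)                  # Bonferroni over all pixels
    for i in range(H):
        for j in range(W):
            p = binomtest(int(top_count[i, j]), n, 0.5,
                          alternative="greater").pvalue
            if p <= alpha_pixel:
                out[i, j] = top[i, j]
    return out
```

Abstaining pixels are exactly what lets the overall segmentation stay certifiable even when individual pixels cannot pass the corrected test.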
Standard and Non-Standard Inferences in the Description Logic FL₀ Using Tree Automata
Although quite inexpressive, the description logic (DL) FL₀, which provides only conjunction, value restriction, and the top concept as concept constructors, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL₀ TBoxes is coNP-complete, and becomes even ExpTime-complete when general TBoxes are used. In the present paper, we use automata working on infinite trees to solve both standard and non-standard inferences in FL₀ w.r.t. general TBoxes. First, we give an alternative proof of the ExpTime upper bound for subsumption in FL₀ w.r.t. general TBoxes based on the use of looping tree automata. Second, we employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer and the difference of FL₀ concepts w.r.t. general TBoxes.
Certified Defenses: Why Tighter Relaxations May Hurt Training
Certified defenses based on convex relaxations are an established technique
for training provably robust models. The key component is the choice of
relaxation, varying from simple intervals to tight polyhedra. Paradoxically,
however, training with tighter relaxations can often lead to worse certified
robustness. The poor understanding of this paradox has forced recent
state-of-the-art certified defenses to focus on designing various heuristics in
order to mitigate its effects. In contrast, in this paper we study the
underlying causes and show that tightness alone may not be the determining
factor. Concretely, we identify two key properties of relaxations that impact
training dynamics: continuity and sensitivity. Our extensive experimental
evaluation demonstrates that these two factors, observed alongside tightness,
explain the drop in certified robustness for popular relaxations. Further, we
investigate the possibility of designing and training with relaxations that are
tight, continuous and not sensitive. We believe the insights of this work can
help drive the principled discovery of new and effective certified defense
mechanisms.
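For context on what "training with a relaxation" means here, the following is a minimal sketch of the standard certified-training objective: whichever relaxation is chosen (intervals or something tighter) produces per-class output bounds `lb`/`ub` for the input region, and the loss is taken on the worst logits consistent with those bounds. The helper names are illustrative, not the paper's API; how `lb`/`ub` are computed is exactly where continuity and sensitivity of the relaxation enter the training dynamics.

```python
import torch
import torch.nn.functional as F

def worst_case_logits(lb: torch.Tensor, ub: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Adversarially worst logits consistent with the bounds: the lower bound
    for the true class, the upper bound for every other class."""
    idx = torch.arange(len(y), device=y.device)
    wc = ub.clone()
    wc[idx, y] = lb[idx, y]
    return wc

def certified_loss(lb: torch.Tensor, ub: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Cross-entropy on the worst-case logits; minimizing this trains the
    network to be certifiable under whichever relaxation produced (lb, ub)."""
    return F.cross_entropy(worst_case_logits(lb, ub, y), y)
```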
Efficient Certification of Spatial Robustness
Recent work has exposed the vulnerability of computer vision models to vector
field attacks. Due to the widespread usage of such models in safety-critical
applications, it is crucial to quantify their robustness against such spatial
transformations. However, existing work only provides empirical robustness
quantification against vector field deformations via adversarial attacks, which
lack provable guarantees. In this work, we propose novel convex relaxations,
enabling us, for the first time, to provide a certificate of robustness against
vector field transformations. Our relaxations are model-agnostic and can be
leveraged by a wide range of neural network verifiers. Experiments on various
network architectures and different datasets demonstrate the effectiveness and
scalability of our method.
Comment: Conference Paper at AAAI 2021
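As a rough illustration of the perturbation model being certified (not of the convex relaxations themselves), the sketch below applies a vector-field deformation to an image with bilinear interpolation; `deform` and its tensor shapes are hypothetical conventions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def deform(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Apply a vector-field deformation: each output pixel samples the input
    at its own location displaced by `flow` (given in pixels), using bilinear
    interpolation. `image` is (1, C, H, W); `flow` is (H, W, 2) as (dx, dy)."""
    _, _, H, W = image.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    gx = 2 * (xs + flow[..., 0]) / (W - 1) - 1   # normalize x coords to [-1, 1]
    gy = 2 * (ys + flow[..., 1]) / (H - 1) - 1   # normalize y coords to [-1, 1]
    grid = torch.stack((gx, gy), dim=-1).unsqueeze(0)
    return F.grid_sample(image, grid, align_corners=True)
```

A robustness certificate then bounds the network output over all such flows of bounded magnitude, rather than over a single adversarially chosen one.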
Latent Space Smoothing for Individually Fair Representations
Fair representation learning encodes user data to ensure fairness and
utility, regardless of the downstream application. However, learning
individually fair representations, i.e., guaranteeing that similar individuals
are treated similarly, remains challenging in high-dimensional settings such as
computer vision. In this work, we introduce LASSI, the first representation
learning method for certifying individual fairness of high-dimensional data.
Our key insight is to leverage recent advances in generative modeling to
capture the set of similar individuals in the generative latent space. This
allows learning individually fair representations where similar individuals are
mapped close together, by using adversarial training to minimize the distance
between their representations. Finally, we employ randomized smoothing to
provably map similar individuals close together, in turn ensuring that local
robustness verification of the downstream application results in end-to-end
fairness certification. Our experimental evaluation on challenging real-world
image data demonstrates that our method increases certified individual fairness
by up to 60%, without significantly affecting task utility.
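A rough sketch of the latent-space smoothing step is given below, under the assumption that `classifier` maps a latent code to an integer class id; the constants and the Clopper-Pearson/Cohen-style radius computation are a generic randomized-smoothing certificate, shown only to illustrate how proximity in the latent space turns into a provable guarantee, and the paper composes this with the generative model and adversarial training described above.

```python
import numpy as np
from scipy.stats import beta, norm

NUM_CLASSES = 2   # assumed binary downstream task

def certified_latent_radius(classifier, z, sigma=0.5, n=10000, alpha=0.001):
    """Majority vote of the downstream classifier under Gaussian noise in the
    latent space, plus a certified l2 radius from a Clopper-Pearson lower
    bound on the top-class probability (Cohen et al.-style)."""
    votes = np.zeros(NUM_CLASSES, dtype=np.int64)
    for _ in range(n):
        votes[classifier(z + sigma * np.random.randn(*z.shape))] += 1
    top = int(votes.argmax())
    k = int(votes[top])
    p_lower = beta.ppf(alpha, k, n - k + 1)      # lower confidence bound on p_top
    if p_lower <= 0.5:
        return top, 0.0                          # abstain: no certificate
    return top, sigma * norm.ppf(p_lower)        # latents within this radius agree
```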
Certified Defense to Image Transformations via Randomized Smoothing
Fair representation learning provides an effective way of enforcing fairness constraints without compromising utility for downstream users. A desirable family of such fairness constraints, each requiring similar treatment for similar individuals, is known as individual fairness. In this work, we introduce the first method that enables data consumers to obtain certificates of individual fairness for existing and new data points. The key idea is to map similar individuals to close latent representations and leverage this latent proximity to certify individual fairness. That is, our method enables the data producer to learn and certify a representation where, for a data point, all similar individuals are at ℓ∞-distance at most ε, thus allowing data consumers to certify individual fairness by proving ε-robustness of their classifier. Our experimental evaluation on five real-world datasets and several fairness constraints demonstrates the expressivity and scalability of our approach.
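The end-to-end logic of that certificate can be summarized in a few lines; all names below (`encode`, `certified_linf_radius`) are hypothetical stand-ins for the producer's representation and the consumer's robustness verifier, not the paper's interface.

```python
def certify_individual_fairness(encode, certified_linf_radius, x, eps):
    """Compose the two guarantees described above: the data producer certifies
    that every individual similar to `x` is mapped within l-infinity distance
    `eps` of z = encode(x); the data consumer certifies that their classifier's
    prediction is constant on an l-infinity ball of radius
    certified_linf_radius(z) around z. If that radius covers eps, all similar
    individuals provably receive the same prediction."""
    z = encode(x)
    return certified_linf_radius(z) >= eps
```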
Universal Approximation with Certified Networks
Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks. However, it is currently very difficult to train a neural network that is both accurate and certifiably robust. In this work we take a step towards addressing this challenge. We prove that for every continuous function f, there exists a network n such that: (i) n approximates f arbitrarily closely, and (ii) simple interval bound propagation of a region B through n yields a result that is arbitrarily close to the optimal output of f on B. Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.
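To make "interval bound propagation of a region B" concrete, here is a minimal sketch of IBP through a small ReLU network; the toy weights, box, and certification check are arbitrary illustrative values, not taken from the paper.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate an interval [l, u] through an affine layer x -> W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def ibp_relu(l, u):
    """Propagate an interval through ReLU (monotone, so clamp both ends)."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Toy usage: bound a 2-layer ReLU network on the box B = [x - eps, x + eps].
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)
x, eps = rng.normal(size=4), 0.1

l, u = ibp_affine(x - eps, x + eps, W1, b1)
l, u = ibp_relu(l, u)
l, u = ibp_affine(l, u, W2, b2)
certified = l[0] > u[1]   # worst-case output 0 still exceeds best-case output 1
```

The theorem above says that for any continuous target function there is a network for which these simple box bounds are themselves nearly optimal on B.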